Online Class-Incremental Continual Learning with Adversarial Shapley Value
Authors
Abstract
As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption. While memory replay techniques have shown exceptional promise for this task of continual learning, the best method for selecting which buffered images to replay is still an open question. In this paper, we specifically focus on the online class-incremental setting where a model needs to learn new classes continually from an online stream of data. To this end, we contribute a novel Adversarial Shapley value scoring method that scores memory data samples according to their ability to preserve latent decision boundaries for previously observed classes (to maintain learning stability and avoid forgetting) while interfering with latent decision boundaries of current classes being learned (to encourage plasticity and optimal learning of new class boundaries). Overall, we observe that our proposed ASER method provides competitive or improved performance compared to state-of-the-art replay-based continual learning methods on a variety of datasets.
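The scoring idea can be made concrete with a small sketch. The snippet below is a minimal Python illustration, assuming the Shapley values are the closed-form KNN Shapley values of Jia et al. (2019) computed on latent features; the function names, the choice K=3, and the exact way the cooperative and adversarial terms are combined are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def knn_shapley(train_feats, train_labels, test_feat, test_label, K=3):
    # Closed-form KNN Shapley value of every training point for one
    # evaluation point (Jia et al., 2019): sort by distance to the
    # evaluation point, then apply the recurrence from farthest to nearest.
    N = len(train_labels)
    order = np.argsort(np.linalg.norm(train_feats - test_feat, axis=1))
    match = (train_labels[order] == test_label).astype(float)
    sv_sorted = np.zeros(N)
    sv_sorted[N - 1] = match[N - 1] / N
    for i in range(N - 2, -1, -1):
        sv_sorted[i] = sv_sorted[i + 1] + (match[i] - match[i + 1]) / K \
            * min(K, i + 1) / (i + 1)
    sv = np.zeros(N)
    sv[order] = sv_sorted
    return sv

def aser_scores(mem_feats, mem_labels, cur_feats, cur_labels, K=3):
    # Cooperative term: how much each memory sample supports correct
    # KNN classification of the memory itself (stability; for simplicity
    # the evaluation point is not excluded from the memory here).
    coop = np.mean([knn_shapley(mem_feats, mem_labels, f, y, K)
                    for f, y in zip(mem_feats, mem_labels)], axis=0)
    # Adversarial term: how much it supports the incoming batch; samples
    # that interfere with the new classes get a low adversarial value.
    adv = np.mean([knn_shapley(mem_feats, mem_labels, f, y, K)
                   for f, y in zip(cur_feats, cur_labels)], axis=0)
    return coop - adv  # replay the memory samples with the highest scores
```

A retrieval step would then embed both the memory buffer and the incoming batch with the current feature extractor and replay the top-scoring buffered samples.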
Similar Resources
Continual Learning in Generative Adversarial Nets
Developments in deep generative models have allowed for tractable learning of high-dimensional data distributions. While the employed learning procedures typically assume that training data is drawn i.i.d. from the distribution of interest, it may be desirable to model distinct distributions which are observed sequentially, such as when different classes are encountered over time. Although cond...
Online Learning with Adversarial Delays
We study the performance of standard online learning algorithms when the feedback is delayed by an adversary. We show that online-gradient-descent [1] and follow-the-perturbed-leader [2] achieve regret O(√D) in the delayed setting, where D is the sum of delays of each round's feedback. This bound collapses to an optimal O(√T) bound in the usual setting of no delays (where D = T). Our main...
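To make the delay model concrete, here is a minimal, hedged Python sketch of online gradient descent with delayed feedback; the function name delayed_ogd, the step size, and the toy linear losses are illustrative assumptions, not the algorithm exactly as analyzed in the paper.

```python
import numpy as np

def delayed_ogd(grad_fns, delays, dim, lr=0.05):
    # Online gradient descent when the gradient of round t's loss only
    # becomes usable delays[t] rounds later. Regret grows as O(sqrt(D))
    # with D = sum(delays); delays[t] = 1 for all t recovers standard
    # OGD and its O(sqrt(T)) regret.
    x = np.zeros(dim)
    pending = {}  # arrival round -> gradients whose delay has expired
    plays = []
    for t in range(len(grad_fns)):
        for g in pending.pop(t, []):      # apply feedback arriving now
            x = x - lr * g
        plays.append(x.copy())            # play the current iterate
        g_t = grad_fns[t](x)              # gradient at the played point...
        pending.setdefault(t + delays[t], []).append(g_t)  # ...seen later
    return plays

# Toy usage: linear losses f_t(x) = <c_t, x> with random delays in [1, 4].
rng = np.random.default_rng(0)
T, dim = 100, 5
cs = rng.normal(size=(T, dim))
grad_fns = [lambda x, c=c: c for c in cs]
delays = rng.integers(1, 5, size=T).tolist()
plays = delayed_ogd(grad_fns, delays, dim)
```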
Incremental Classifier Learning with Generative Adversarial Networks
In this paper, we address the incremental classifier learning problem, which suffers from catastrophic forgetting. The main reason for catastrophic forgetting is that the past data are not available during learning. Typical approaches keep some exemplars for the past classes and use distillation regularization to retain the classification capability on the past classes and balance the past and ...
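The exemplar-plus-distillation recipe described here is standard enough to sketch. Below is a minimal PyTorch illustration of distillation regularization on past-class logits, assuming a Hinton-style temperature-softened KL term; the names incremental_step and distillation_loss, the temperature T=2.0, and the weight lam are illustrative, and the paper's actual loss and balancing scheme may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    # Soften the old model's outputs on past classes and penalize the
    # new model's divergence from them (retains past-class behavior).
    old_probs = F.softmax(old_logits / T, dim=1)
    new_logp = F.log_softmax(new_logits / T, dim=1)
    return F.kl_div(new_logp, old_probs, reduction="batchmean") * T * T

def incremental_step(new_model, old_model, x, y, n_old_classes, lam=1.0):
    # One training step on a mixed batch (exemplars + new data):
    # cross-entropy over all classes plus distillation on the logits
    # of the previously seen classes.
    logits = new_model(x)
    with torch.no_grad():
        old_logits = old_model(x)[:, :n_old_classes]
    ce = F.cross_entropy(logits, y)
    kd = distillation_loss(logits[:, :n_old_classes], old_logits)
    return ce + lam * kd
```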
Online Incremental Feature Learning with Denoising Autoencoders
While determining model complexity is an important problem in machine learning, many feature learning algorithms rely on cross-validation to choose an optimal number of features, which is usually challenging for online learning from a massive stream of data. In this paper, we propose an incremental feature learning algorithm to determine the optimal model complexity for large-scale, online data...
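The add-features-on-demand idea can be illustrated with a tiny NumPy sketch of a tied-weight denoising autoencoder that grows its hidden layer when the reconstruction error plateaus; the class name, the masking-noise corruption, and the grow-by-k heuristic are assumptions, a simplification of the add/merge strategy the abstract alludes to.

```python
import numpy as np

rng = np.random.default_rng(0)

class IncrementalDAE:
    # Tied-weight denoising autoencoder trained online; grow() adds new
    # hidden features, the simplest form of incremental feature learning.
    def __init__(self, n_in, n_hidden=8, lr=0.01, noise=0.2):
        self.W = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.lr, self.noise = lr, noise

    def step(self, x):
        x_tilde = x * (rng.random(x.shape) > self.noise)  # masking noise
        h = np.tanh(self.W @ x_tilde)                     # encode
        x_hat = self.W.T @ h                              # decode (tied)
        err = x_hat - x
        # Exact gradient of 0.5*||x_hat - x||^2 w.r.t. the tied weights:
        # encoder path plus decoder path.
        grad = np.outer((1.0 - h**2) * (self.W @ err), x_tilde) \
            + np.outer(h, err)
        self.W -= self.lr * grad
        return float(err @ err)                           # squared error

    def grow(self, k=4):
        # Add k freshly initialized hidden features (more capacity).
        self.W = np.vstack([self.W,
                            rng.normal(0.0, 0.1, (k, self.W.shape[1]))])

# Toy usage: grow the hidden layer if the running error stops improving.
dae, errs = IncrementalDAE(n_in=16), []
for t in range(2000):
    errs.append(dae.step(rng.random(16)))
    if t % 500 == 499 and np.mean(errs[-250:]) > 0.9 * np.mean(errs[-500:-250]):
        dae.grow()
```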
Perturbation Algorithms for Adversarial Online Learning
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i11.17159